Set BWA and bwamem2 index memory dynamically #6628
Conversation
This one doesn't fail, but it's good practice.
I like it 👍 it'll also play nice with the new resourceLimits directive
I was talking to @drpatelh about this earlier this week. Sounds good. Very neat if it scales in such a linear way. Should we add a baseline of additional memory?
Closes nf-core/sarek#1377
What can we do for bwa mem and bwamem2 mem?
What do you mean?
Is it right to have these settings hardcoded in the module? How does it interact with a pipeline-level config file that also sets memory for the process: which one takes precedence?
AFAIK the pipeline config takes precedence over the config hardcoded in the module.
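As a sketch of how that precedence plays out (the selector name and value here are illustrative, not from this PR):

```groovy
// Pipeline-level config, e.g. conf/modules.config. A withName block here
// overrides the memory directive hardcoded inside the module itself, so a
// module-level dynamic default like `memory { 28.B * fasta.size() }` only
// applies when the pipeline doesn't set its own value.
process {
    withName: 'BWAMEM2_INDEX' {
        memory = 64.GB   // hypothetical override; wins over the module default
    }
}
```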
I was more worried that 28 GB / Gbp is still too high in my view. I use 24 GB / Gbp in my pipelines and wouldn't want nf-core to force me to waste RAM ;) I wasn't worried about …
FYI, I've just checked our LSF logs and there have been zero memory failures over the 1,698 … Regardless of the scaling factor you use, I'd still keep …
I think that's a great point; we should definitely add that to these. Any opinions on the scaling factor? Should we really double it every time, or would 1.5x be generous enough?
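For context, the usual nf-core pattern for growing memory on retry looks roughly like this (selector, base value, and retry count below are placeholders, not what this PR ships):

```groovy
// Hypothetical retry-scaling sketch: on an out-of-memory failure the task
// is resubmitted with more memory. This grows linearly per attempt; the
// "double vs 1.5x" debate above is about what multiplier to apply instead.
process {
    withName: 'BWAMEM2_INDEX' {
        errorStrategy = 'retry'
        maxRetries    = 2
        memory        = { 24.GB * task.attempt }   // attempt 1 -> 24 GB, attempt 2 -> 48 GB
    }
}
```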
Power to you! 😆 I vote we go with the figure mentioned in the bwamem2 issue that they expect it to use. Unless you have a better link to point at when people start asking why their jobs keep failing.
This has gone stale except for @muffato's comments, which I've addressed. Let's merge it and see if anyone has issues; we can revert if it breaks something.
I've noticed a small issue with this when running on a small genome with docker:
But nothing that can't be fixed with a label selector.
We could also just fix it in the module: make sure the computed value isn't 0 or below Docker's 6 MB minimum memory limit.
We could take the max between the minimal requirement and the memory we compute there.
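A minimal sketch of that floor inside the module's memory directive (the 6 MB floor matches Docker's minimum memory limit; the exact floor value is an assumption):

```groovy
// Hypothetical: clamp the size-based estimate so a tiny test genome never
// produces a zero or sub-minimal allocation that Docker rejects.
memory {
    def estimated = 28.B * fasta.size()   // size-based estimate in bytes
    estimated > 6.MB ? estimated : 6.MB   // never go below the floor
}
```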
Kept having bwamem2 index tasks that ran forever and failed.
Updated bwamem2 to use 28 bytes (`28.B`) of memory per byte of fasta. Issue for reference: bwa-mem2/bwa-mem2#9
Also tracked down the required memory for bwa index while I was at it. It doesn't seem to fail in practice because most genomes' requirements fall under the default memory.
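The dynamic directive described above would look roughly like this inside the module (a sketch, not the exact diff; the input/output declarations are simplified):

```groovy
// bwa-mem2 indexing needs ~28 bytes of RAM per byte of reference
// (bwa-mem2/bwa-mem2#9). fasta.size() returns the file size in bytes,
// so the directive scales linearly with the genome.
process BWAMEM2_INDEX {
    memory { 28.B * fasta.size() }

    input:
    tuple val(meta), path(fasta)

    output:
    tuple val(meta), path("${fasta}.*"), emit: index

    script:
    """
    bwa-mem2 index $fasta
    """
}
```

Because the closure is evaluated per task, each genome gets its own allocation rather than a one-size-fits-all value.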
Not the first place where people have run into this: #6628